Creating artificial intelligence (AI) systems capable of demonstrating lifelong learning is a fundamental challenge, and many approaches and metrics have been proposed to analyze algorithmic properties. However, for existing lifelong learning metrics, algorithmic contributions are confounded by task and scenario structure. To mitigate this issue, we introduce an algorithm-agnostic, explainable surrogate-modeling approach to estimate latent properties of lifelong learning algorithms. We validate the approach for estimating these properties via experiments on synthetic data. To validate the structure of the surrogate model, we analyze real performance data from popular lifelong learning approaches and baselines applied to lifelong classification and lifelong reinforcement learning.
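As a rough illustration of the surrogate-modeling idea (not the paper's actual model), one might fit a simple parametric performance curve to observed lifelong-learning data and read latent properties off the fitted parameters; the curve form, parameter names, and data below are hypothetical.

```python
# Illustrative sketch only: fit a simple parametric surrogate to observed
# lifelong-learning performance and treat its parameters as latent properties.
# The curve form, parameter names and data are hypothetical, not the paper's model.
import numpy as np
from scipy.optimize import curve_fit

def surrogate(experience, asymptote, learning_rate, start):
    """Saturating performance curve: start -> asymptote as experience grows."""
    return asymptote - (asymptote - start) * np.exp(-learning_rate * experience)

# Hypothetical per-task performance measurements over accumulated experience.
experience = np.arange(1, 21, dtype=float)
performance = 0.9 - 0.6 * np.exp(-0.3 * experience) + np.random.normal(0, 0.02, 20)

params, _ = curve_fit(surrogate, experience, performance, p0=[0.8, 0.1, 0.3])
asymptote, learning_rate, start = params
print(f"estimated asymptote={asymptote:.2f}, learning rate={learning_rate:.2f}")
```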
Language models demonstrate both quantitative improvements and new qualitative capabilities as they increase in scale. Despite their potentially transformative impact, these new capabilities are poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
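For a sense of how such a benchmark is consumed, here is a minimal few-shot evaluation sketch assuming a task file with an "examples" list of input/target pairs, which is roughly the layout of BIG-bench's simple JSON tasks; the model call and exact-match scoring are placeholders rather than the benchmark's official harness.

```python
# Rough sketch of few-shot evaluation on a JSON-style benchmark task.
# Assumes a task file with an "examples" list of {"input", "target"} pairs
# (roughly the layout of BIG-bench's simple JSON tasks); `query_model` is a
# placeholder for whatever language model is being evaluated.
import json

def query_model(prompt: str) -> str:
    raise NotImplementedError("plug in the language model under evaluation")

def evaluate_task(task_path: str, n_shots: int = 2) -> float:
    with open(task_path) as f:
        examples = json.load(f)["examples"]
    shots, queries = examples[:n_shots], examples[n_shots:]
    context = "".join(f"Q: {ex['input']}\nA: {ex['target']}\n" for ex in shots)
    correct = 0
    for ex in queries:
        answer = query_model(context + f"Q: {ex['input']}\nA:")
        correct += answer.strip() == ex["target"].strip()
    return correct / max(len(queries), 1)
```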
Deep learning is a standard tool in high-energy physics, facilitating sensitivity enhancements for numerous analysis strategies. In particular, complex neural network architectures play an important role in identifying physics objects, such as in jet flavour tagging. However, these methods rely on accurate simulations. Mismodelling can lead to non-negligible differences in performance with respect to data that need to be measured and calibrated. We study the classifier response to input data and probe the vulnerability of flavour-tagging algorithms by applying adversarial attacks. Subsequently, we present an adversarial training strategy to mitigate the impact of such simulated attacks and improve the robustness of the classifier. We investigate the relation between performance and vulnerability and show that this method constitutes a promising approach to reduce the vulnerability to poor modelling.
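A minimal sketch of the general recipe follows, using an FGSM-style perturbation of the tagger inputs; the network architecture, feature dimension and perturbation size are placeholders, and the paper's actual attack and training details may differ.

```python
# Minimal sketch of adversarial training for a flavour-tagging classifier.
# Uses an FGSM-style perturbation of the input features; the architecture,
# feature dimension and epsilon are placeholders, not the paper's exact setup.
import torch
import torch.nn as nn

tagger = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 3))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(tagger.parameters(), lr=1e-3)
epsilon = 0.01  # perturbation size in (normalized) input-feature space

def adversarial_step(features, labels):
    features = features.clone().requires_grad_(True)
    loss = loss_fn(tagger(features), labels)
    grad, = torch.autograd.grad(loss, features)
    adv_features = features + epsilon * grad.sign()  # FGSM-style perturbation

    optimizer.zero_grad()
    # Train on clean and perturbed inputs to improve robustness to mismodelling.
    total = loss_fn(tagger(features.detach()), labels) + \
            loss_fn(tagger(adv_features.detach()), labels)
    total.backward()
    optimizer.step()
    return total.item()
```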
As 3D object detection on point clouds relies on the geometrical relationships between points, non-standard object shapes can hinder a method's detection capability. However, in safety-critical settings, robustness to out-of-distribution and long-tail samples is fundamental to circumventing dangerous issues, such as the misdetection of damaged or rare cars. In this work, we substantially improve the generalization of 3D object detectors to out-of-domain data by considering deformed point clouds during training. We achieve this with 3D-VField: a novel method that plausibly deforms objects via vector fields learned in an adversarial fashion. Our approach constrains 3D points to slide along their sensor view rays, while neither adding nor removing any of them. The obtained vector fields are transferable, sample-independent, and preserve shape smoothness and occlusions. By augmenting normal samples with deformations produced by these vector fields during training, we significantly improve robustness to differently shaped objects, such as damaged/deformed cars, even when training only on KITTI. To this end, we propose and share the open-source CrashD: a synthetic dataset of realistic damaged and rare cars, with a variety of crash scenarios. Extensive experiments on KITTI, Waymo, our CrashD, and SUN RGB-D show the high generalizability of our technique to out-of-domain data, different models, and sensors, namely LiDAR and ToF cameras, for both indoor and outdoor scenes. Our CrashD dataset is available at https://crashd-cars.github.io.
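The core augmentation can be pictured as sliding each object point along its ray from the sensor by a small signed magnitude; the sketch below uses random magnitudes purely to show the geometry, whereas 3D-VField learns them adversarially.

```python
# Illustration of deforming a point cloud along sensor view rays: each point is
# shifted by a signed magnitude along the ray from the sensor origin, so no
# points are added or removed. 3D-VField learns these magnitudes adversarially;
# here they are random, purely to show the geometry of the augmentation.
import numpy as np

def deform_along_view_rays(points, magnitudes, sensor_origin=np.zeros(3)):
    """points: (N, 3) array; magnitudes: (N,) signed shifts along the view rays."""
    rays = points - sensor_origin
    rays /= np.linalg.norm(rays, axis=1, keepdims=True) + 1e-9
    return points + magnitudes[:, None] * rays

points = np.random.uniform(-10, 10, size=(2048, 3))   # placeholder object points
magnitudes = np.random.uniform(-0.2, 0.2, size=2048)  # placeholder vector field
deformed = deform_along_view_rays(points, magnitudes)
```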
The deep image prior was recently introduced as a prior for image reconstruction. It represents the image to be recovered as the output of a deep convolutional neural network, and learns the network's parameters such that the output fits the corrupted observation. Despite its impressive reconstruction properties, the approach is slow when compared to learned or traditional reconstruction techniques. Our work develops a two-stage learning paradigm to address this computational challenge: (i) we perform supervised pretraining of the network on a synthetic dataset; (ii) we fine-tune the network's parameters to adapt to the target reconstruction. We show that pretraining considerably improves the subsequent reconstruction from real-measured micro computed tomography data of biological specimens. Code and additional experimental material are available at https://educateddip.github.io/docs.educated_deep_image_prior/.
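A compact sketch of the two-stage idea is given below: supervised pretraining on synthetic pairs, followed by deep-image-prior-style fine-tuning against a single corrupted observation. The network, forward operator, and data are placeholders, not the paper's actual architecture or CT setup.

```python
# Sketch of the two-stage paradigm: (i) supervised pretraining on synthetic
# pairs, (ii) DIP-style fine-tuning against one corrupted observation.
# Network, forward operator and data loaders are placeholders.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, 3, padding=1))
mse = nn.MSELoss()

def pretrain(synthetic_pairs, epochs=10):
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(epochs):
        for noisy, clean in synthetic_pairs:     # stage (i): supervised pretraining
            opt.zero_grad()
            mse(net(noisy), clean).backward()
            opt.step()

def finetune(observation, forward_op, steps=500):
    opt = torch.optim.Adam(net.parameters(), lr=1e-4)
    net_input = torch.randn_like(observation)    # fixed DIP input (placeholder shape)
    for _ in range(steps):                       # stage (ii): fit the measurement
        opt.zero_grad()
        mse(forward_op(net(net_input)), observation).backward()
        opt.step()
    return net(net_input).detach()
```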
In the past years, deep learning has seen an increase of usage in the domain of histopathological applications. However, while these approaches have shown great potential, in high-risk environments deep learning models need to be able to judge their own uncertainty and be able to reject inputs when there is a significant chance of misclassification. In this work, we conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of Whole-Slide-Images under domain shift using the H\&E stained Camelyon17 breast cancer dataset. Although it is known that histopathological data can be subject to strong domain shift and label noise, to our knowledge this is the first work that compares the most common methods for uncertainty estimation under these aspects. In our experiments, we compare Stochastic Variational Inference, Monte-Carlo Dropout, Deep Ensembles, Test-Time Data Augmentation as well as combinations thereof. We observe that ensembles of methods generally lead to higher accuracies and better calibration and that Test-Time Data Augmentation can be a promising alternative when choosing an appropriate set of augmentations. Across methods, a rejection of the most uncertain tiles leads to a significant increase in classification accuracy on both in-distribution as well as out-of-distribution data. Furthermore, we conduct experiments comparing these methods under varying conditions of label noise. We observe that the border regions of the Camelyon17 dataset are subject to label noise and evaluate the robustness of the included methods against different noise levels. Lastly, we publish our code framework to facilitate further research on uncertainty estimation on histopathological data.
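As a rough sketch of one ingredient of this comparison, Monte-Carlo Dropout uncertainty can be estimated by keeping dropout active at test time and rejecting the most uncertain tiles; the classifier, number of samples, and rejection threshold below are placeholders.

```python
# Sketch of Monte-Carlo Dropout uncertainty with rejection of uncertain tiles.
# The classifier, number of samples and rejection threshold are placeholders.
import torch

def mc_dropout_predict(model, tiles, n_samples=20):
    model.train()                         # keep dropout active at test time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(tiles), dim=-1)
                             for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy            # predictive mean and uncertainty

def predict_with_rejection(model, tiles, entropy_threshold=0.5):
    mean_probs, entropy = mc_dropout_predict(model, tiles)
    accept = entropy < entropy_threshold  # reject the most uncertain tiles
    return mean_probs.argmax(dim=-1)[accept], accept
```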
Charisma is considered one's ability to attract and potentially also influence others. Clearly, there can be considerable interest from an artificial intelligence's (AI) perspective to provide it with such skill. Beyond that, a plethora of use cases opens up for computational measurement of human charisma, such as for tutoring humans in the acquisition of charisma, mediating human-to-human conversation, or identifying charismatic individuals in big social data. A number of models exist that base charisma on various dimensions, often following the idea that charisma is given if someone could and would help others. Examples include influence (could help) and affability (would help) in scientific studies or power (could help), presence, and warmth (both would help) as a popular concept. Modelling high levels in these dimensions for humanoid robots or virtual agents seems accomplishable. Beyond that, automatic measurement also appears quite feasible with the recent advances in the related fields of Affective Computing and Social Signal Processing. Here, we therefore present a blueprint for building machines that can appear charismatic, but also analyse the charisma of others. To this end, we first provide the psychological perspective, including different models of charisma and behavioural cues of it. We then switch to conversational charisma in spoken language as an exemplary modality that is essential for human-human and human-computer conversations. The computational perspective then deals with the recognition and generation of charismatic behaviour by AI. This includes an overview of the state of play in the field and the aforementioned blueprint. We then name exemplary use cases of computational charismatic skills before switching to ethical aspects and concluding this overview and perspective on building charisma-enabled AI.
Deep learning-based 3D human pose estimation performs best when trained on large amounts of labeled data, making combined learning from many datasets an important research direction. One obstacle to this endeavor is the different skeleton formats provided by different datasets, i.e., they do not label the same set of anatomical landmarks. There is little prior research on how to best supervise one model with such discrepant labels. We show that simply using separate output heads for different skeletons results in inconsistent depth estimates and insufficient information sharing across skeletons. As a remedy, we propose a novel affine-combining autoencoder (ACAE) method to perform dimensionality reduction on the number of landmarks. The discovered latent 3D points capture the redundancy among skeletons, enabling enhanced information sharing when used for consistency regularization. Our approach scales to an extreme multi-dataset regime, where we use 28 3D human pose datasets to supervise one model, which outperforms prior work on a range of benchmarks, including the challenging 3D Poses in the Wild (3DPW) dataset. Our code and models are available for research purposes.
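To give a rough picture of the affine-combining idea, the sketch below forms latent keypoints as weighted combinations of the input landmarks, with weights summing to one so the mapping is translation-equivariant, and reconstructs landmarks from the latent points in the same way. Sizes and the softmax parametrization (which yields convex rather than general affine weights) are illustrative choices, not necessarily those of the paper.

```python
# Sketch of an affine-combining autoencoder over landmarks: latent 3D points are
# weighted combinations (weights summing to 1) of the input joints, and joints
# are reconstructed as weighted combinations of the latent points, making the
# mapping translation-equivariant. Softmax gives convex weights, a simple way to
# enforce sum-to-one; the actual method may allow general affine weights.
import torch
import torch.nn as nn

class AffineCombiningAutoencoder(nn.Module):
    def __init__(self, n_joints=122, n_latent=32):
        super().__init__()
        self.enc_logits = nn.Parameter(torch.randn(n_latent, n_joints))
        self.dec_logits = nn.Parameter(torch.randn(n_joints, n_latent))

    def forward(self, joints):                         # joints: (batch, n_joints, 3)
        enc_w = torch.softmax(self.enc_logits, dim=-1)  # rows sum to 1
        dec_w = torch.softmax(self.dec_logits, dim=-1)
        latent = enc_w @ joints                        # (batch, n_latent, 3)
        recon = dec_w @ latent                         # (batch, n_joints, 3)
        return latent, recon

model = AffineCombiningAutoencoder()
joints = torch.randn(4, 122, 3)                        # placeholder landmark batch
latent, recon = model(joints)
loss = torch.nn.functional.mse_loss(recon, joints)
```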
This article concerns Bayesian inference using deep linear networks with output dimension one. In the interpolating (zero noise) regime we show that with Gaussian weight priors and MSE negative log-likelihood loss both the predictive posterior and the Bayesian model evidence can be written in closed form in terms of a class of meromorphic special functions called Meijer-G functions. These results are non-asymptotic and hold for any training dataset, network depth, and hidden layer widths, giving exact solutions to Bayesian interpolation using a deep Gaussian process with a Euclidean covariance at each layer. Through novel asymptotic expansions of Meijer-G functions, a rich new picture of the role of depth emerges. Specifically, we find that the posteriors in deep linear networks with data-independent priors are the same as in shallow networks with evidence maximizing data-dependent priors. In this sense, deep linear networks make provably optimal predictions. We also prove that, starting from data-agnostic priors, Bayesian model evidence in wide networks is only maximized at infinite depth. This gives a principled reason to prefer deeper networks (at least in the linear case). Finally, our results show that with data-agnostic priors a novel notion of effective depth given by \[\#\text{hidden layers}\times\frac{\#\text{training data}}{\text{network width}}\] determines the Bayesian posterior in wide linear networks, giving rigorous new scaling laws for generalization error.
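To make the effective-depth notion concrete, here is a small worked example with hypothetical values (the numbers are illustrative only, not taken from the paper).

% Illustrative arithmetic with hypothetical values: a wide linear network with
% 6 hidden layers, 10^4 training points and width 500.
\[
\#\text{hidden layers}\times\frac{\#\text{training data}}{\text{network width}}
  \;=\; 6 \times \frac{10^{4}}{500}
  \;=\; 120 .
\]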
In this paper we study the smooth strongly convex minimization problem $\min_{x}\min_y f(x,y)$. The existing optimal first-order methods require $\mathcal{O}(\sqrt{\max\{\kappa_x,\kappa_y\}} \log 1/\epsilon)$ computations of both $\nabla_x f(x,y)$ and $\nabla_y f(x,y)$, where $\kappa_x$ and $\kappa_y$ are condition numbers with respect to variable blocks $x$ and $y$. We propose a new algorithm that only requires $\mathcal{O}(\sqrt{\kappa_x} \log 1/\epsilon)$ computations of $\nabla_x f(x,y)$ and $\mathcal{O}(\sqrt{\kappa_y} \log 1/\epsilon)$ computations of $\nabla_y f(x,y)$. In some applications $\kappa_x \gg \kappa_y$, and computation of $\nabla_y f(x,y)$ is significantly cheaper than computation of $\nabla_x f(x,y)$. In this case, our algorithm substantially outperforms the existing state-of-the-art methods.
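To illustrate the gap, consider hypothetical condition numbers $\kappa_x = 10^4$ and $\kappa_y = 10^2$ (values chosen only for illustration): plugging them into the stated bounds gives

% Hypothetical condition numbers, chosen only to illustrate the complexity gap.
\[
\underbrace{\mathcal{O}\!\big(\sqrt{\max\{\kappa_x,\kappa_y\}}\,\log 1/\epsilon\big)}_{\text{existing: }\;100\log 1/\epsilon\;\text{calls to each gradient}}
\;\;\longrightarrow\;\;
\underbrace{\mathcal{O}\!\big(\sqrt{\kappa_x}\,\log 1/\epsilon\big)}_{100\log 1/\epsilon\;\text{calls to}\;\nabla_x f}
\;+\;
\underbrace{\mathcal{O}\!\big(\sqrt{\kappa_y}\,\log 1/\epsilon\big)}_{10\log 1/\epsilon\;\text{calls to}\;\nabla_y f},
\]

so the expensive gradient $\nabla_x f$ is called as often as before, while calls to the cheap gradient $\nabla_y f$ drop by an order of magnitude in this example.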